Towards provable probabilistic safety for scalable embodied AI systems

He, Linxuan, Jia, Qing-Shan, Li, Ang, Sang, Hongyan, Wang, Ling, Lu, Jiwen, Zhang, Tao, Zhou, Jie, Zhang, Yi, Wang, Yisen, Wei, Peng, Wang, Zhongyuan, Liu, Henry X., Feng, Shuo

arXiv.org Artificial Intelligence

Embodied AI systems, comprising AI models and physical plants, are increasingly prevalent across various applications. Due to the rarity of system failures, ensuring their safety in complex operating environments remains a major challenge, which severely hinders their large-scale deployment in safety-critical domains, such as autonomous vehicles, medical devices, and robotics. While achieving provable deterministic safety--verifying system safety across all possible scenarios--remains theoretically ideal, the rarity and complexity of corner cases make this approach impractical for scalable embodied AI systems. Instead, empirical safety evaluation is employed as an alternative, but the absence of provable guarantees imposes significant limitations. To address these issues, we argue for a paradigm shift to provable probabilistic safety that integrates provable guarantees with progressive achievement toward a probabilistic safety boundary on overall system performance. The new paradigm better leverages statistical methods to enhance feasibility and scalability, and a well-defined probabilistic safety boundary enables embodied AI systems to be deployed at scale. In this Perspective, we outline a roadmap for provable probabilistic safety, along with corresponding challenges and potential solutions. By bridging the gap between theoretical safety assurance and practical deployment, this Perspective offers a pathway toward safer, large-scale adoption of embodied AI systems in safety-critical applications.
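The abstract's central idea, certifying that a system's failure rate lies below a probabilistic safety boundary using statistical methods rather than exhaustive verification, can be illustrated with a minimal sketch. The function names and the concrete numbers below are illustrative assumptions, not from the paper; the bound used here is the standard one-sided Hoeffding concentration inequality applied to i.i.d. pass/fail trials.

```python
import math

def failure_upper_bound(failures: int, trials: int, delta: float = 0.01) -> float:
    """One-sided Hoeffding upper bound on the true failure probability.

    With probability at least 1 - delta over the sampled trials, the true
    failure rate p satisfies p <= p_hat + sqrt(ln(1/delta) / (2 * trials)).
    """
    if trials <= 0:
        raise ValueError("trials must be positive")
    p_hat = failures / trials
    margin = math.sqrt(math.log(1.0 / delta) / (2.0 * trials))
    return min(1.0, p_hat + margin)

def meets_safety_boundary(failures: int, trials: int,
                          boundary: float, delta: float = 0.01) -> bool:
    """Certify, at confidence 1 - delta, that the failure rate is below boundary."""
    return failure_upper_bound(failures, trials, delta) <= boundary

# Illustrative numbers: 0 failures observed in 100,000 simulated scenarios,
# against a hypothetical boundary of 1e-2 at 99% confidence.
print(meets_safety_boundary(0, 100_000, boundary=1e-2))  # True
```

Note that because rare failures dominate the bound, the required number of trials grows roughly as 1/boundary; this is exactly the scalability pressure the Perspective argues statistical methods must address.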


Improved Safe Real-Time Heuristic Search

Cserna, Bence (University of New Hampshire) | Gall, Kevin C. (University of New Hampshire) | Ruml, Wheeler (University of New Hampshire)

AAAI Conferences

A fundamental concern in real-time planning is the presence of dead-ends in the state space, from which no goal is reachable. Providing real-time heuristic search algorithms that are complete in domains with dead-end states is a challenging problem. Recently, the SafeRTS algorithm was proposed for searching in such spaces (Cserna et al. 2018). SafeRTS exploits a user-provided predicate to identify safe states, from which a goal is likely reachable, and attempts to maintain a backup plan for reaching a safe state at all times. SafeRTS interleaves exploration and safety proofs during its planning phase. As a direct consequence, it attempts safety proofs on nodes that become internal to the LSS by the end of the search iteration. As shown in Cserna, Gall, and Ruml (2019), it would be equally or less difficult to achieve the same or better safety coverage by doing safety proofs after all the LSS expansions. Empirically, this optimization led to 0.5-2.5% savings on expansions in our experiments (Cserna, Gall, and Ruml 2019). SafeRTS has an anytime behavior


Improved Safe Real-time Heuristic Search

Cserna, Bence, Gall, Kevin C., Ruml, Wheeler

arXiv.org Artificial Intelligence

A fundamental concern in real-time planning is the presence of dead-ends in the state space, from which no goal is reachable. Recently, the SafeRTS algorithm was proposed for searching in such spaces. SafeRTS exploits a user-provided predicate to identify safe states, from which a goal is likely reachable, and attempts to maintain a backup plan for reaching a safe state at all times. In this paper, we study the SafeRTS approach, identify certain properties of its behavior, and design an improved framework for safe real-time search. We prove that the new approach performs at least as well as SafeRTS and present experimental results showing that its promise is fulfilled in practice.
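The reordering the paper advocates, expanding the local search space (LSS) first and only then attempting safety proofs on its frontier, can be sketched on a toy grid domain. Everything concrete here is an illustrative assumption: the grid, the cell encoding, and the function names are hypothetical, and the user-provided safety predicate is reduced to a cell check rather than a real reachability proof. This is a sketch of the expand-then-prove structure, not the authors' implementation.

```python
from collections import deque

# Toy grid domain: '.' free, '#' blocked, 'S' a known-safe cell, 'G' the goal.
# SafeRTS itself operates on arbitrary state spaces with a user-provided
# safety predicate; this grid is purely for illustration.
GRID = [
    "S..#",
    ".#..",
    "...G",
]

def neighbors(pos):
    """4-connected moves that stay on the grid and avoid blocked cells."""
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != '#':
            yield (nr, nc)

def is_safe(pos):
    """Stand-in for the user-provided safety predicate."""
    return GRID[pos[0]][pos[1]] in ('S', 'G')

def lss_then_prove(start, expansion_budget):
    """Expand an LSS via breadth-first search under a fixed budget, then
    attempt safety proofs only on the surviving frontier -- the deferred
    ordering that avoids proving nodes that end up internal to the LSS."""
    frontier = deque([start])
    parents = {start: None}
    expanded = []
    while frontier and len(expanded) < expansion_budget:
        s = frontier.popleft()
        expanded.append(s)
        for n in neighbors(s):
            if n not in parents:
                parents[n] = s
                frontier.append(n)
    # Safety proofs after all expansions: only frontier nodes are checked,
    # so no proof effort is spent on nodes that became internal.
    proven_safe = [s for s in frontier if is_safe(s)]
    return expanded, list(frontier), proven_safe

expanded, frontier, proven_safe = lss_then_prove((1, 0), expansion_budget=2)
```

In an interleaved scheme, a proof attempted early on a node that is later expanded into the LSS interior is wasted work; deferring proofs to the frontier sidesteps that, which is the intuition behind the reported expansion savings.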